
    Submillimeter Astronomy

    For submillimeter astronomy, particularly at 200 µm, the ARENA working group has proposed a 25 m telescope at the Concordia station on Dome C. Issues related to this suggestion are reviewed.

    Studying Turbulence using Doppler-broadened lines: Velocity Coordinate Spectrum

    We discuss a new technique for studying astrophysical turbulence that utilizes the statistics of Doppler-broadened spectral lines. The technique relates the Velocity Coordinate Spectrum (VCS), i.e. the power spectrum of fluctuations measured along the velocity axis in Position-Position-Velocity (PPV) data cubes available from observations, to the underlying power spectra of the velocity/density fluctuations. Unlike standard spatial spectra, which are functions of angular wavenumber, the VCS is a function of the velocity wavenumber k_v ~ 1/v. We show that absorption affects the VCS to a higher degree for small k_v and obtain criteria for disregarding absorption effects in turbulence studies at large k_v. We consider the retrieval of turbulence spectra for both high and low spatial resolution observations and find that the VCS allows one to study turbulence even when the emitting turbulent volume is not spatially resolved. This opens interesting prospects for using the technique in extragalactic research. We show that, while thermal broadening interferes with turbulence studies using the VCS, it is possible to separate thermal and non-thermal contributions. This allows a new way of determining the temperature of the interstellar gas using emission and absorption spectral lines. Comment: 27 pages, 3 figures; content extended and presentation reorganized to correspond to the version accepted to ApJ
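
    The core operation behind the VCS is simple to sketch. Below is a minimal illustration (not the authors' code) of estimating a VCS from a PPV cube with numpy: Fourier-transform each line of sight along the velocity axis and average the power over sky positions. The axis ordering, normalisation, and the synthetic cube are assumptions made for illustration.

    ```python
    # Minimal sketch (assumptions, not the paper's code): estimating a
    # Velocity Coordinate Spectrum (VCS) from a synthetic PPV cube.
    import numpy as np

    def vcs(ppv):
        """Average power spectrum along the velocity axis of a PPV cube.

        ppv : array of shape (nx, ny, nv); returns (k_v, P(k_v)) for k_v >= 0.
        """
        nv = ppv.shape[-1]
        # FFT each line of sight along the velocity coordinate.
        ft = np.fft.rfft(ppv, axis=-1)
        # Average |FT|^2 over all sky positions to beat down noise.
        power = np.mean(np.abs(ft) ** 2, axis=(0, 1))
        k_v = np.fft.rfftfreq(nv)   # velocity wavenumbers, cycles per channel
        return k_v, power

    # Toy usage: a random cube stands in for real spectral-line data.
    rng = np.random.default_rng(0)
    cube = rng.standard_normal((64, 64, 256))
    k_v, p = vcs(cube)
    ```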

    Out-Of-Focus Holography at the Green Bank Telescope

    We describe phase-retrieval holography measurements of the 100-m diameter Green Bank Telescope using astronomical sources and an astronomical receiver operating at a wavelength of 7 mm. We use the technique with parameterization of the aperture in terms of Zernike polynomials and employing a large defocus, as described by Nikolic, Hills & Richer (2006). Individual measurements take around 25 minutes, and from the resulting beam maps (which have peak signal-to-noise ratios of 200:1) we show that it is possible to produce low-resolution maps of the wavefront errors with an accuracy of around a hundredth of a wavelength. Using such measurements over a wide range of elevations, we have calculated a model for the wavefront errors due to the uncompensated gravitational deformation of the telescope. This model produces a significant improvement at low elevations, where these errors are expected to be largest; after applying the model, the aperture efficiency is largely independent of elevation. We have also demonstrated that the technique can be used to measure and largely correct for thermal deformations of the antenna, which often exceed the uncompensated gravitational deformations during daytime observing. We conclude that the aberrations induced by gravity and thermal effects are large-scale and that the technique used here is particularly suitable for measuring such deformations in large millimetre-wave radio telescopes. Comment: 10 pages, 7 figures (accepted by Astronomy & Astrophysics)
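
    As an illustration of the forward model underlying such phase-retrieval measurements, the sketch below (an assumption-laden toy, not the pipeline of Nikolic, Hills & Richer 2006) builds an aperture phase from a couple of low-order Zernike terms plus a large defocus and computes the resulting far-field beam map by FFT. All amplitudes and the grid size are invented.

    ```python
    # Minimal sketch (assumptions): far-field beam of a circular aperture
    # whose phase errors are low-order Zernike terms plus a large defocus,
    # the situation exploited by out-of-focus (OOF) holography.
    import numpy as np

    n = 256
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    r2 = x**2 + y**2
    aperture = (r2 <= 1.0).astype(float)      # unit-radius circular dish

    # Two low-order Zernike terms stand in for the fitted surface errors.
    coma_x  = 0.05 * (3 * r2 - 2) * x         # Z(3, 1), arbitrary amplitude
    astig   = 0.03 * (x**2 - y**2)            # Z(2, 2), arbitrary amplitude
    defocus = 2.0  * (2 * r2 - 1)             # large, known defocus Z(2, 0)
    phase = 2 * np.pi * (coma_x + astig + defocus)

    # Far-field (Fraunhofer) beam: power of the FFT of the aperture field.
    field = aperture * np.exp(1j * phase)
    beam = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    beam /= beam.max()                        # normalised beam map
    ```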

    PyCOOL - a Cosmological Object-Oriented Lattice code written in Python

    There are a number of different phenomena in the early universe that have to be studied numerically with lattice simulations. This paper presents a graphics processing unit (GPU) accelerated Python program called PyCOOL that solves the evolution of scalar fields on a lattice with very precise symplectic integrators. The program has been written with the intention of hitting a sweet spot of speed, accuracy and user friendliness. This has been achieved by using the Python language with the PyCUDA interface to make a program that is easy to adapt to different scalar field models. In this paper we derive the symplectic dynamics that govern the evolution of the system and then present the implementation of the program in Python and PyCUDA. The functionality of the program is tested in a chaotic inflation preheating model, a single field oscillon case and in a supersymmetric curvaton model which leads to Q-ball production. We have also compared the performance of a consumer graphics card to a professional Tesla compute card in these simulations. We find that the program is not only accurate but also very fast. To further increase the usefulness of the program we have equipped it with numerous post-processing functions that provide useful information about the cosmological model. These include various spectra and statistics of the fields. The program can additionally be used to calculate the generated curvature perturbation. The program is publicly available under the GNU General Public License at https://github.com/jtksai/PyCOOL . Some additional information can be found at http://www.physics.utu.fi/tiedostot/theory/particlecosmology/pycool/ . Comment: 23 pages, 12 figures; some typos corrected
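
    PyCOOL itself uses higher-order symplectic integrators on the GPU; the sketch below shows only the second-order symplectic (leapfrog) building block for a scalar field on a periodic lattice, in plain numpy. The lattice size, the potential V = m^2 phi^2 / 2, and the absence of cosmic expansion are simplifying assumptions, not the program's setup.

    ```python
    # Minimal sketch (assumptions): a second-order symplectic (leapfrog)
    # step for a real scalar field on a periodic 3D lattice.
    import numpy as np

    def laplacian(f, dx):
        """Nearest-neighbour periodic Laplacian in 3D."""
        lap = -6.0 * f
        for ax in range(3):
            lap += np.roll(f, 1, ax) + np.roll(f, -1, ax)
        return lap / dx**2

    def leapfrog_step(phi, pi, dt, dx, m=1.0):
        # Kick-drift-kick: each half-step is the exact flow of a
        # sub-Hamiltonian, so the composition is symplectic and reversible.
        pi  = pi + 0.5 * dt * (laplacian(phi, dx) - m**2 * phi)
        phi = phi + dt * pi
        pi  = pi + 0.5 * dt * (laplacian(phi, dx) - m**2 * phi)
        return phi, pi

    # Toy run on a 32^3 lattice with small random initial fluctuations.
    rng = np.random.default_rng(1)
    phi = 1e-3 * rng.standard_normal((32, 32, 32))
    pi = np.zeros_like(phi)
    for _ in range(100):
        phi, pi = leapfrog_step(phi, pi, dt=0.05, dx=1.0)
    ```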

    MYRIAD: A new N-body code for simulations of Star Clusters

    We present a new C++ code for collisional N-body simulations of star clusters. The code uses the Hermite fourth-order scheme with block time steps for advancing the particles in time, while the forces and neighboring particles are computed using the GRAPE-6 board. Special treatment is used for close encounters and for binary and multiple sub-systems that either form dynamically or exist in the initial configuration. The structure of the code is modular and allows the appropriate treatment of additional physical phenomena, such as stellar and binary evolution, stellar collisions and the evolution of close black-hole binaries. Moreover, it can easily be modified so that the part of the code that uses GRAPE-6 is replaced by another module that uses other accelerating hardware, such as Graphics Processing Units (GPUs). Appropriate choice of the free parameters gives good accuracy and speed for simulations of star clusters up to and beyond core collapse. Simulations of Plummer models consisting of equal-mass stars reached core collapse at t~17 half-mass relaxation times, which compares very well with existing results, while the cumulative relative error in the energy remained below 0.001. Comparisons with published results of other codes for the time of core collapse for different initial conditions also show excellent agreement. Simulations of King models with an initial mass function, similar to those found in the literature, reached core collapse at t~0.17, which is slightly smaller than the expected result from previous works. Finally, the code accuracy becomes comparable to, and even better than, the accuracy of existing codes when a number of close binary systems are dynamically created in a simulation. This is due to the high accuracy of the method used for close binary and multiple sub-systems. Comment: 24 pages, 29 figures, accepted for publication in Astronomy & Astrophysics
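
    For readers unfamiliar with the integrator, here is a minimal shared-timestep sketch of one Hermite fourth-order predictor-corrector step by direct summation; MYRIAD adds block time steps, GRAPE-6 forces, and special treatment of close encounters on top of this. The softening value and toy setup are assumptions.

    ```python
    # Minimal sketch (assumptions): one shared-timestep Hermite 4th-order
    # predictor-corrector step for a direct-summation N-body system.
    import numpy as np

    def acc_jerk(pos, vel, mass, eps2=1e-6):
        """Softened accelerations and jerks by direct summation."""
        dr = pos[None, :, :] - pos[:, None, :]     # r_j - r_i
        dv = vel[None, :, :] - vel[:, None, :]
        r2 = (dr**2).sum(-1) + eps2
        np.fill_diagonal(r2, 1.0)                  # dummy to avoid 0/0
        inv_r3 = r2**-1.5
        np.fill_diagonal(inv_r3, 0.0)              # no self-interaction
        rv = (dr * dv).sum(-1) / r2
        acc = (mass[None, :, None] * dr * inv_r3[:, :, None]).sum(1)
        jerk = (mass[None, :, None] * inv_r3[:, :, None]
                * (dv - 3.0 * rv[:, :, None] * dr)).sum(1)
        return acc, jerk

    def hermite_step(pos, vel, mass, dt):
        a0, j0 = acc_jerk(pos, vel, mass)
        # Predict positions and velocities to 3rd/2nd order in dt.
        pos_p = pos + vel*dt + a0*dt**2/2 + j0*dt**3/6
        vel_p = vel + a0*dt + j0*dt**2/2
        a1, j1 = acc_jerk(pos_p, vel_p, mass)
        # Correct using the Hermite interpolation of the force polynomial.
        vel_c = vel + (a0 + a1)*dt/2 + (j0 - j1)*dt**2/12
        pos_c = pos + (vel + vel_c)*dt/2 + (a0 - a1)*dt**2/12
        return pos_c, vel_c
    ```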

    A modified parallel tree code for N-body simulation of the Large Scale Structure of the Universe

    N-body codes to perform simulations of the origin and evolution of the Large Scale Structure of the Universe have improved significantly over the past decade, both in terms of the resolution achieved and in the reduction of CPU time. However, state-of-the-art N-body codes hardly allow one to deal with particle numbers larger than a few 10^7, even on the largest parallel systems. In order to allow simulations with larger resolution, we have first re-considered the grouping strategy described in Barnes (1990) (hereafter B90) and applied it, with some modifications, to our WDSH-PT (Work and Data SHaring - Parallel Tree) code. In the first part of this paper we give a short description of the code, which adopts the Barnes and Hut (1986) algorithm (hereafter BH), and in particular of the memory and work distribution strategy applied to handle the data distribution on a CC-NUMA machine like the CRAY-T3E system. In the second part of the paper we describe the modification to the Barnes grouping strategy we have devised to improve the performance of the WDSH-PT code. We use the property that nearby particles have similar interaction lists. This idea was checked in B90, where an interaction list is built that applies everywhere within a cell C_group containing a small number of particles N_crit, and is then reused for each particle p in C_group in turn. We assume each particle p to have the same interaction list, which makes it possible to reduce the CPU time and increase performance. This allows us to run simulations with a large number of particles (N ~ 10^7-10^9) in non-prohibitive times. Comment: 13 pages and 7 figures
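
    The grouping idea is easy to sketch. In the toy below (an illustration, not the WDSH-PT code) the tree walk is elided and `interaction_list` stands in for its output, a set of node masses and centres; the point is that one list, built once for the cell C_group, is applied to all N_crit particles in it instead of walking the tree per particle.

    ```python
    # Minimal sketch (assumptions): B90-style grouping, where one shared
    # interaction list is reused for every particle in a small cell.
    import numpy as np

    def group_accelerations(group_pos, interaction_list, eps2=1e-6):
        """Accelerations on all particles of a group from one shared list."""
        node_m, node_pos = interaction_list       # shapes (K,), (K, 3)
        dr = node_pos[None, :, :] - group_pos[:, None, :]
        inv_r3 = ((dr**2).sum(-1) + eps2) ** -1.5
        return (node_m[None, :, None] * dr * inv_r3[:, :, None]).sum(axis=1)

    # Toy usage: 8 particles in a compact cell, 100 distant tree nodes.
    rng = np.random.default_rng(2)
    group = rng.random((8, 3)) * 0.1              # a compact C_group
    nodes = (rng.random(100), rng.random((100, 3)) + 5.0)
    acc = group_accelerations(group, nodes)       # one list, N_crit particles
    ```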

    Modification of Projected Velocity Power Spectra by Density Inhomogeneities in Compressible Supersonic Turbulence

    (Modified) The scaling of velocity fluctuations, dv, as a function of spatial scale L in molecular clouds can be measured from size-linewidth relations, principal component analysis, or line centroid variation. Differing values of the power law index of the scaling relation dv ~ L^(g3D) in 3D are given by these different methods: the first two give g3D=0.5, while line centroid analysis gives g3D=0. This discrepancy has not previously been fully appreciated, since the variation of projected velocity line centroid fluctuations (dv_lc ~ L^(g2D)) is indeed described, in 2D, by g2D=0.5. However, once projection smoothing is accounted for, this implies that g3D=0. We suggest that a resolution of this discrepancy can be achieved by accounting for the effect of density inhomogeneity on the observed g2D obtained from velocity line centroid analysis. Numerical simulations of compressible turbulence are used to show that the effect of density inhomogeneity statistically reverses the effect of projection smoothing in the case of driven turbulence, so that velocity line centroid analysis does indeed predict g2D=g3D=0.5. Using our numerical results we can restore consistency between line centroid analysis, principal component analysis and size-linewidth relations, and we derive g3D=0.5, corresponding to shock-dominated (Burgers) turbulence. We find that this consistency requires that molecular clouds are continually driven on large scales or have only recently formed. Comment: 28 pages total, 20 figures, accepted for publication in ApJ
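
    The centroid statistic at issue can be sketched compactly: form the intensity-weighted mean velocity at each sky position, then measure the angular power spectrum of that map, whose slope yields g2D. The snippet below is an illustrative numpy version; the random cube merely stands in for simulation output.

    ```python
    # Minimal sketch (assumptions): velocity line-centroid map from a PPV
    # cube and its ring-averaged angular power spectrum.
    import numpy as np

    def centroid_map(ppv, v_axis):
        """Intensity-weighted mean velocity at each sky position."""
        num = (ppv * v_axis[None, None, :]).sum(-1)
        return num / ppv.sum(-1)

    def ring_power_spectrum(field):
        """Azimuthally averaged 2D power spectrum of a map."""
        f = field - field.mean()
        p2d = np.abs(np.fft.fftshift(np.fft.fft2(f)))**2
        ny, nx = f.shape
        y, x = np.indices((ny, nx))
        k = np.hypot(x - nx // 2, y - ny // 2).astype(int)
        return np.bincount(k.ravel(), p2d.ravel()) / np.bincount(k.ravel())

    # Toy usage with a random positive cube in place of simulation output.
    rng = np.random.default_rng(3)
    ppv = rng.random((64, 64, 128))
    v = np.linspace(-10, 10, 128)
    pk = ring_power_spectrum(centroid_map(ppv, v))   # fit slope for g2D
    ```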

    Not an open cluster after all: the NGC 6863 asterism in Aquila

    Shortly after birth, open clusters start dissolving, gradually losing stars into the surrounding stellar field. The time scale for complete disintegration depends on both their initial membership and their location within the Galaxy. Open clusters undergoing the terminal phase of disruption, or open cluster remnants (OCRs), are notoriously difficult to identify. From an observational point of view, a combination of low-number statistics and minimal contrast against the general stellar field conspires to turn them into very challenging objects. To make the situation even worse, random samples of field stars often display features that may lead one to classify them erroneously as extremely evolved open clusters. In this paper, we provide a detailed study of the stellar content and kinematics of NGC 6863, a compact group of a few stars located in Aquila and described by the POSS as a non-existent cluster. Nonetheless, this object has recently been classified as an OCR. The aim of the present work is to either confirm or disprove its OCR status by a detailed star-by-star analysis. The analysis is performed using wide-field photometry in the UBVI pass-bands, proper motions from the UCAC3 catalogue, and high resolution spectroscopy, as well as results from extensive N-body calculations. Our results show that the four brightest stars commonly associated with NGC 6863 form an asterism, a group of non-physically associated stars projected together, leading to the conclusion that NGC 6863 is not a real open cluster. Comment: 10 pages, 8 eps figures, in press in Astronomy and Astrophysics. Abstract shortened to fit in
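
    To give a flavour of such a star-by-star kinematic check (an illustration only, not the paper's actual analysis), the snippet below tests whether a handful of stars share a common proper motion within their catalogue errors; the proper motions and uncertainties are made-up numbers, not NGC 6863 data.

    ```python
    # Minimal sketch (illustrative data): chi-square test of whether stars
    # share a common proper motion, the kind of check that exposes asterisms.
    import numpy as np
    from scipy import stats

    pm  = np.array([[2.1, -5.3], [8.7, 1.2], [-3.4, 0.8], [2.0, -5.0]])
    err = np.array([[1.0,  1.0], [1.2, 1.1], [ 0.9, 1.0], [1.1,  1.2]])

    # Error-weighted mean proper motion and chi-square of the residuals.
    w = 1.0 / err**2
    mean_pm = (w * pm).sum(0) / w.sum(0)
    chi2 = (((pm - mean_pm) / err) ** 2).sum()
    dof = pm.size - 2                       # two fitted mean components
    p_value = stats.chi2.sf(chi2, dof)      # small p => not co-moving
    ```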

    Habitable Zones and UV Habitable Zones around Host Stars

    Ultraviolet radiation is a double-edged sword for life. If it is too strong, terrestrial biological systems are damaged; if it is too weak, the synthesis of many biochemical compounds cannot proceed. We derive the continuous ultraviolet habitable zones and compare them with the habitable zones of host stars. Using the boundary ultraviolet radiation of the ultraviolet habitable zone, we calculate the ultraviolet habitable zones of host stars with masses from 0.08 to 4.00 M_sun. For host stars with effective temperatures lower than 4,600 K, the ultraviolet habitable zones are closer in than the habitable zones. For host stars with effective temperatures higher than 7,137 K, the ultraviolet habitable zones are farther out than the habitable zones. For a hot subdwarf as a host star, the distance of the ultraviolet habitable zone is about ten times that of the habitable zone, which is not suitable for the existence of life. Comment: 5 pages, 3 figures
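
    The comparison rests on a simple scaling: both zone distances go as the square root of the relevant luminosity, bolometric for the habitable zone and UV for the ultraviolet habitable zone, so a cool star with little UV output has its UV zone inside its habitable zone. The sketch below illustrates this with blackbody UV fractions; the band limits, the 1 AU normalisation, and the stellar parameters are illustrative assumptions, not the paper's boundary UV radiation values.

    ```python
    # Minimal sketch (assumptions): comparing a habitable zone scaled by
    # bolometric flux with a UV habitable zone scaled by blackbody UV flux.
    import numpy as np
    from scipy.integrate import quad

    def planck(lam, t):                      # spectral radiance, lam in metres
        h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
        return 2*h*c**2 / lam**5 / (np.exp(h*c/(lam*kb*t)) - 1.0)

    def uv_fraction(t, lo=200e-9, hi=315e-9):
        """Fraction of blackbody output emitted in an assumed UV band."""
        sigma = 5.670e-8
        band, _ = quad(planck, lo, hi, args=(t,))
        return np.pi * band / (sigma * t**4)

    # Distances scale as sqrt(luminosity): d = sqrt(L / L_ref) * d_ref.
    l_bol = 0.06                             # L / L_sun, an M-dwarf-like star
    t_eff = 3800.0
    l_uv = l_bol * uv_fraction(t_eff) / uv_fraction(5772.0)
    d_hz = np.sqrt(l_bol) * 1.0              # AU; HZ put at 1 AU for the Sun
    d_uvhz = np.sqrt(l_uv) * 1.0             # UV HZ under the same scaling
    print(d_hz, d_uvhz)                      # UV HZ lies inside the HZ here
    ```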

    Hydra: A Parallel Adaptive Grid Code

    We describe the first parallel implementation of an adaptive particle-particle, particle-mesh code with smoothed particle hydrodynamics. Parallelisation of the serial code, "Hydra", is achieved by using CRAFT, a Cray proprietary language which allows rapid implementation of a serial code on a parallel machine by allowing global addressing of distributed memory. The collisionless variant of the code has already completed several 16.8 million particle cosmological simulations on a 128 processor Cray T3D, whilst the full hydrodynamic code has completed several 4.2 million particle combined gas and dark matter runs. The efficiency of the code now allows parameter-space explorations to be performed routinely using 64^3 particles of each species. A complete run including gas cooling, from high redshift to the present epoch, requires approximately 10 hours on 64 processors. In this paper we present implementation details and results on the performance and scalability of the CRAFT version of Hydra under varying degrees of particle clustering. Comment: 23 pages, LaTeX plus encapsulated figures
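
    To give a feel for the mesh half of such an adaptive P3M scheme, here is a minimal cloud-in-cell mass-assignment sketch in 1D (an illustration under simplifying assumptions, not Hydra's CRAFT implementation); each particle's mass is split between its two nearest grid cells in proportion to its offset.

    ```python
    # Minimal sketch (assumptions): cloud-in-cell (CIC) mass assignment,
    # the deposit step of a particle-mesh force calculation, in 1D.
    import numpy as np

    def cic_deposit(x, mass, n_cells, box):
        """Deposit particle mass onto a periodic 1D grid with CIC weights."""
        rho = np.zeros(n_cells)
        u = x / box * n_cells                # position in cell units
        i = np.floor(u).astype(int)
        f = u - i                            # fractional offset in the cell
        np.add.at(rho, i % n_cells, mass * (1.0 - f))
        np.add.at(rho, (i + 1) % n_cells, mass * f)
        return rho

    # Toy usage: 10^4 unit-mass particles in a unit box on a 64-cell grid.
    rng = np.random.default_rng(4)
    x = rng.random(10_000)
    rho = cic_deposit(x, np.full(10_000, 1.0), 64, 1.0)
    assert np.isclose(rho.sum(), 10_000)     # mass is conserved by CIC
    ```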